Though we are only three years into the decade, the 2020s have already cemented themselves as a turning point in the artificial intelligence (AI) arms race, from the release of OpenAI's ChatGPT and the subsequent GPT-4 model to text-to-image generators like Midjourney and DALL-E, among countless other examples.
These increasingly powerful AI tools and models have made work easier across all industries. From coding to writing and other generative tasks, AI is now being embraced by businesses and professionals around the globe.
At the same time, cybercriminals have begun to exploit these tools for nefarious ends. Because this new toolkit can be deployed almost instantaneously, a new level of vigilance is needed to protect against the resulting rise in threats, fakes and imposters. Based on nearly a decade of using AI to fight AI-related fraud, here are a few tips to fight back and protect your organization.
Connecting the dots between AI and phishing attacks
First, let’s examine the risks:
Phishing is a malicious social engineering practice that dupes consumers into sharing personally identifiable information, such as credit card numbers, Social Security numbers or bank account credentials.
In today's digital-first era, phishing attacks have grown more sophisticated, often taking the form of social media impersonations (think fake social media accounts that pose as a company or one of its employees). In many cases, these scams are executed through direct messages that redirect users to a malicious website or convince them to send money through digital or cryptocurrency wallets.
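To make that mechanic concrete: one simple defensive counterpart is scanning a message for links and flagging any that do not point at a brand's official domains. The sketch below is illustrative only; the allowlist, domain names and message text are hypothetical placeholders, not a production control.

```python
# Minimal sketch: flag links in a direct message whose host is not on a
# brand's official-domain allowlist. All names here are hypothetical.
import re
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"examplebank.com", "support.examplebank.com"}  # hypothetical allowlist
URL_PATTERN = re.compile(r"https?://\S+")

def suspicious_links(message: str) -> list[str]:
    """Return any links whose host is not an official brand domain."""
    flagged = []
    for url in URL_PATTERN.findall(message):
        host = urlparse(url).hostname or ""
        if host not in OFFICIAL_DOMAINS:
            flagged.append(url)
    return flagged

dm = "Your account is on hold. Verify here: https://examplebank-secure.com/login"
print(suspicious_links(dm))  # ['https://examplebank-secure.com/login']
```

A real system would also resolve link shorteners, follow redirects and check domain age, but the core idea of comparing hosts against a known-good list stays the same.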
Phishing attacks have increased dramatically in recent years, with a new attack now launching roughly every eleven seconds. Natural language processing (NLP), the branch of artificial intelligence that seeks to understand text and spoken words, is a primary culprit behind this trend. Generative AI tools are both capable and inexpensive, so it is now easier than ever to create scam websites that appear professional and legitimate, and therefore easier to dupe consumers.
Security leaders are also confronting a new arsenal of cyber weapons that lets bad actors communicate more frequently and effectively, and appear authentic enough to trick users. Complementing this trend are generative solutions that enable fraudsters to write code and build the infrastructure for digital scams at unprecedented rates.
Take, for example, the late-2021 efforts of the Russian hacking group FIN7, which created a fake cybersecurity company to dupe firms seeking security services and to enlist genuine talent; the recruits were unaware they were working to expand a large ransomware operation.
The state of AI-powered scams and looking ahead
Increasingly powerful AI models and generative AI tools have given fraudsters a framework for designing and executing scams with unparalleled speed and accuracy. Generative tools like OpenAI's ChatGPT have garnered attention for their ability to produce scripts that are grammatically accurate and professional in tone, ready to be used to dupe consumers on social media platforms or over email. Yet the capabilities of modern AI in producing successful scams extend far beyond script generation.
Take voice cloning scams, for instance. A popular tool in a bad actor's arsenal, these scams can replicate an individual's voice from as little as a brief audio clip pulled from a social media page. Echoing traditional 'family emergency' ruses, the tactic is often highly successful, largely because of recent advancements in generative AI.
Voice cloning is just one of many examples of AI-powered scams. And as investors and institutions prepare to pour hundreds of billions of dollars annually into the AI arms race, businesses and consumers alike must prepare for a world where interactions with humans and with AI are indistinguishable from one another.
Fighting AI with AI
As discussed, recent advancements in AI have thrown a curveball at the cybersecurity space. With powerful new tools ready to deploy, fraudsters can create and execute threats faster and with higher success rates.
Now, it’s time for cybersecurity professionals to fight fire with fire or risk being overwhelmed by a flurry of new threats.
By incorporating AI-powered cybersecurity solutions, professionals can stay ahead of the curve. From spotting social media impersonations to detecting scam websites, AI enables more efficient workflows and stronger tactical remediation, leading to higher success rates for defenders.
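One building block behind impersonation detection is flagging lookalike handles or domains by string similarity. Here is a minimal sketch using only the Python standard library; the protected brand names, threshold and candidate handles are hypothetical, and real platforms combine many more signals (profile images, posting behavior, registration data).

```python
# Minimal sketch: flag handles/domains that closely imitate a protected brand
# using stdlib string similarity. Brand names and threshold are hypothetical.
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["examplebank", "examplebank_support"]  # hypothetical

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalikes(candidates, threshold=0.8):
    """Return (candidate, brand, score) triples for close-but-not-exact matches."""
    hits = []
    for name in candidates:
        for brand in PROTECTED_BRANDS:
            score = similarity(name, brand)
            if score >= threshold and name.lower() != brand:
                hits.append((name, brand, round(score, 2)))
    return hits

suspects = ["examp1ebank", "example-bank", "totally_unrelated"]
print(flag_lookalikes(suspects))
# [('examp1ebank', 'examplebank', 0.91), ('example-bank', 'examplebank', 0.96)]
```

In practice, this kind of fuzzy matching is one weak signal among many that an AI-driven platform scores together.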
By staying informed of advancements in AI and security threats and investing in the right solutions and practices, businesses can effectively safeguard their reputation and financial assets from cybercriminals. Here are a few simple reminders that chief information security officers (CISOs) of every organization should follow:
- Adopt holistic solutions that cover the full range of digital threats, including impersonations, phishing attacks and counterfeits.
- Explore solutions that use AI to improve detection and tracking, helping to identify hidden scam networks and pathways (a minimal sketch of this kind of detection follows this list).
- Take a proactive approach to cybersecurity efforts, including early detection and remediation initiatives — the longer security leaders wait to address threats, the more they expose themselves and their company to risk.
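As a flavor of the AI-assisted detection recommended above, the sketch below trains a toy phishing-message classifier with scikit-learn. The training messages and labels are hypothetical; a production system would learn from large labeled corpora and combine text scores with URL, sender and infrastructure signals.

```python
# Toy sketch of an AI-assisted phishing-text detector using scikit-learn.
# Training data is hypothetical and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked. Verify your password here immediately",  # phishing
    "Congratulations, you won a prize! Send a small fee to claim it",  # phishing
    "Attached is the agenda for Thursday's project meeting",           # legitimate
    "Reminder: your library books are due back next week",             # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features (unigrams and bigrams) feeding a logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = ["Urgent: verify your password to unlock your account"]
print(model.predict_proba(incoming))  # columns: P(legitimate), P(phishing)
```

The pipeline pattern matters more than the specific model: detection, like the threats themselves, improves by retraining on fresh examples as scam language evolves.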
Phishing attacks, impersonations and similar scams will only grow in frequency as technology advances. However, implementing these relatively simple steps now can prepare security leaders for current and future threats, and help turn the tide against AI-related scams.